Gradient methods with memory

Authors

Abstract

In this paper, we consider gradient methods for minimizing smooth convex functions, which employ the information obtained at previous iterations in order to accelerate convergence towards the optimal solution. This information is used in the form of a piece-wise linear model of the objective function, which provides us with much better prediction abilities as compared with the standard linear model. To the best of our knowledge, this approach was never really applied in Convex Minimization to differentiable functions in view of the high complexity of the corresponding auxiliary problems. However, we show that all necessary computations can be done very efficiently. Consequently, we get new optimization methods, which are better than the usual Gradient Methods both in the number of oracle calls and in computational time. Our theoretical conclusions are confirmed by preliminary computational experiments.
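The piece-wise linear model mentioned in the abstract can be pictured as follows: the method stores the linearizations of f collected at the last few iterates and, at every step, minimizes their maximum plus a quadratic proximal term. The sketch below is only an illustration of that idea, not the authors' algorithm; the Euclidean prox term, the bundle size m, and the small Frank-Wolfe loop used to solve the dual of the auxiliary problem are assumptions made here for brevity.

```python
import numpy as np

def gradient_method_with_memory(f, grad, x0, L, m=5, iters=100, fw_steps=50):
    """Illustrative sketch: keep a bundle of the last m linearizations of f and,
    at each step, minimize the piece-wise linear model plus (L/2)||x - x_k||^2."""
    x = x0.astype(float).copy()
    pts, vals, grads = [x.copy()], [f(x)], [grad(x)]
    for _ in range(iters):
        P, V, G = np.array(pts), np.array(vals), np.array(grads)
        # Values of the stored linear pieces l_i(x) = f(x_i) + <g_i, x - x_i>.
        lin = V + np.sum(G * (x - P), axis=1)
        # Dual of the auxiliary problem: maximize over the simplex
        #   phi(lam) = <lam, lin> - (1/(2L)) ||G^T lam||^2,
        # solved here with a few Frank-Wolfe steps to keep the sketch short.
        lam = np.full(len(pts), 1.0 / len(pts))
        for t in range(fw_steps):
            g_dual = lin - G @ (G.T @ lam) / L   # gradient of phi at lam
            j = int(np.argmax(g_dual))           # best vertex of the simplex
            step = 2.0 / (t + 2.0)
            lam *= 1.0 - step
            lam[j] += step
        x = x - (G.T @ lam) / L                  # primal point recovered from the dual solution
        pts.append(x.copy()); vals.append(f(x)); grads.append(grad(x))
        if len(pts) > m:                         # memory: keep only the last m linearizations
            pts.pop(0); vals.pop(0); grads.pop(0)
    return x

# Example on a strongly convex quadratic with gradient Lipschitz constant L = 10.
A = np.diag(np.linspace(1.0, 10.0, 20))
x_min = gradient_method_with_memory(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                                    np.ones(20), L=10.0)
```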


Similar articles

Gradient Convergence in Gradient Methods with Errors

We consider the gradient method x_{t+1} = x_t + γ_t(s_t + w_t), where s_t is a descent direction of a function f : ℝⁿ → ℝ and w_t is a deterministic or stochastic error. We assume that ∇f is Lipschitz continuous, that the stepsize γ_t diminishes to 0, and that s_t and w_t satisfy standard conditions. We show that either f(x_t) → −∞ or f(x_t) converges to a finite value and ∇f(x_t) → 0 (with probability 1 in t...
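As a quick numerical illustration of the update rule quoted above, the snippet below runs x_{t+1} = x_t + γ_t(s_t + w_t) with s_t = −∇f(x_t), a stochastic error w_t, and a diminishing stepsize; the test function, noise scale, and stepsize schedule are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth test function with Lipschitz-continuous gradient (chosen here for illustration).
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x

x = rng.normal(size=10)
for t in range(1, 10_001):
    gamma = 1.0 / t                          # diminishing stepsize: sum = inf, sum of squares < inf
    s = -grad(x)                             # descent direction s_t
    w = 0.1 * rng.normal(size=x.shape)       # stochastic error w_t
    x = x + gamma * (s + w)                  # x_{t+1} = x_t + gamma_t (s_t + w_t)

print(np.linalg.norm(grad(x)))               # the gradient norm ends up close to 0, as the result predicts
```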

Adversarial Texts with Gradient Methods

Adversarial samples for images have been extensively studied in the literature. Among many of the attacking methods, gradient-based methods are both effective and easy to compute. In this work, we propose a framework to adapt the gradient attacking methods on images to text domain. The main difficulties for generating adversarial texts with gradient methods are: (i) the input space is discrete,...

Gradient Convergence in Gradient Methods

For the classical gradient method x_{t+1} = x_t − γ_t∇f(x_t) and several deterministic and stochastic variants, we discuss the issue of convergence of the gradient sequence ∇f(x_t) and the attendant issue of stationarity of limit points of x_t. We assume that ∇f is Lipschitz continuous, and that the stepsize γ_t diminishes to 0 and satisfies standard stochastic approximation conditions. We show that eit...

Gradient-Based Methods with Local Optimization

Multiple views of a scene can provide important information about the structure and dynamic behavior of three-dimensional objects. Many of the methods that recover this information require the determination of optical flow-the velocity, on the image, of visible points on object surfaces. An important class of techniques for estimating optical flow depend on the relationship between the gradient...

Gradient Methods with Adaptive Step-Sizes

Motivated by the superlinear behavior of the Barzilai-Borwein (BB) method for two-dimensional quadratics, we propose two gradient methods which adaptively choose a small step-size or a large step-size at each iteration. The small step-size is primarily used to induce a favorable descent direction for the next iteration, while the large step-size is primarily used to produce a sufficient reducti...
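The two Barzilai-Borwein step-sizes referred to above are alpha_long = (sᵀs)/(sᵀy) and alpha_short = (sᵀy)/(yᵀy), with s = x_k − x_{k−1} and y = ∇f(x_k) − ∇f(x_{k−1}). The sketch below alternates between them using a simple ratio test; the threshold kappa and the switching rule are illustrative assumptions, not necessarily the rule proposed in the cited paper.

```python
import numpy as np

def bb_adaptive(grad, x0, iters=200, kappa=0.5, alpha0=1e-3):
    """Gradient method that switches between the two Barzilai-Borwein step-sizes.
    The rule (short step when alpha_short / alpha_long < kappa) is a common
    heuristic used here for illustration only."""
    x, g = x0.astype(float).copy(), grad(x0)
    x_prev = g_prev = None
    for _ in range(iters):
        if x_prev is None:
            alpha = alpha0                               # no curvature information yet
        else:
            s, y = x - x_prev, g - g_prev
            alpha_long = (s @ s) / (s @ y)               # BB1: the "large" step-size
            alpha_short = (s @ y) / (y @ y)              # BB2: the "small" step-size
            alpha = alpha_short if alpha_short / alpha_long < kappa else alpha_long
        x_prev, g_prev = x, g
        x = x - alpha * g                                # gradient step with the chosen step-size
        g = grad(x)
    return x

# Example on a convex quadratic, where s^T y > 0 so both step-sizes are well defined.
A = np.diag(np.linspace(1.0, 100.0, 50))
x_min = bb_adaptive(lambda x: A @ x, np.ones(50))
```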

Journal

Journal title: Optimization Methods & Software

Year: 2021

ISSN: 1055-6788, 1026-7670, 1029-4937

DOI: https://doi.org/10.1080/10556788.2020.1858831